    Decision by sampling

    We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales. Instead, we assume that an attribute's subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is its rank within the sample. We assume that the sample reflects both the immediate distribution of attribute values from the current decision's context and also the background, real-world distribution of attribute values. DbS accounts for concave utility functions; losses looming larger than gains; hyperbolic temporal discounting; and the overestimation of small probabilities and the underestimation of large probabilities.
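
    A minimal sketch of the core idea, assuming a toy memory sample (the function name, the mixture of context and background values, and the exponential background distribution are illustrative assumptions, not the paper's specification):

        import random

        def dbs_subjective_value(target, sample):
            # Rank of the target within the sample, expressed as the fraction
            # of binary, ordinal comparisons that the target wins.
            return sum(target > v for v in sample) / len(sample)

        random.seed(0)
        context = [5.0, 20.0, 50.0]                      # values from the current decision
        background = [random.expovariate(1 / 30) for _ in range(200)]  # skewed real-world values
        sample = context + random.sample(background, 17)  # memory sample mixes both sources

        print(dbs_subjective_value(35.0, sample))        # rank-based subjective value in [0, 1]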

    Decision by sampling: the role of the decision environment in risky choice

    Decision by sampling (DbS) is a theory about how our environment shapes the decisions that we make. Here, I review the application of DbS to risky decision making. According to classical theories of risky decision making, people make stable transformations between outcomes and probabilities and their subjective counterparts using fixed psychoeconomic functions. DbS offers a quite different account. In DbS, the subjective value of an outcome or probability is derived from a series of binary, ordinal comparisons with a sample of other outcomes or probabilities from the decision environment. In this way, the distribution of attribute values in the environment determines the subjective valuations of outcomes and probabilities. I show how DbS interacts with the real-world distributions of gains, losses, and probabilities to produce the classical psychoeconomic functions. I extend DbS to account for preferences in benchmark data sets. Finally, in a challenge to the classical notion of stable subjective valuations, I review evidence that manipulating the distribution of attribute values in the environment changes our subjective valuations just as DbS predicts.
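
    A sketch of that interaction, assuming a positively skewed environment of gains (the exponential distribution below is a stand-in for the real-world distribution, not data from the paper):

        import random

        def rank_value(x, environment):
            # Subjective value of x = its relative rank among sampled gains.
            return sum(x > g for g in environment) / len(environment)

        random.seed(1)
        gains = [random.expovariate(1 / 50) for _ in range(10_000)]  # small gains common, large rare

        for amount in (10, 50, 100, 200, 400):
            print(amount, round(rank_value(amount, gains), 3))
        # Value climbs steeply for small amounts and flattens for large ones:
        # the rank transform of a skewed environment is concave, like a
        # classical utility function.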

    Multialternative decision by sampling: a model of decision making constrained by process data

    Sequential sampling of evidence, or evidence accumulation, has been implemented in a variety of models to explain a range of multialternative choice phenomena. But the existing models do not agree on what, exactly, the evidence is that is accumulated. They also do not agree on how this evidence is accumulated. In this article, we use findings from process-tracing studies to constrain the evidence accumulation process. With these constraints, we extend the decision by sampling model and propose the multialternative decision by sampling (MDbS) model. In MDbS, the evidence accumulated is outcomes of pairwise ordinal comparisons between attribute values. MDbS provides a quantitative account of the attraction, compromise, and similarity effects equal to that of other models, and captures a wider range of empirical phenomena than other models.
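
    A toy accumulator in the spirit of MDbS (the function, stopping rule, and attribute values below are illustrative assumptions, not the fitted model):

        import random

        def mdbs_choice(alternatives, threshold=10, max_steps=10_000):
            # Evidence for an alternative grows by one each time one of its
            # attribute values wins a pairwise ordinal comparison against the
            # same attribute of another alternative.
            evidence = {name: 0 for name in alternatives}
            names = list(alternatives)
            leader = names[0]
            for _ in range(max_steps):
                a, b = random.sample(names, 2)                 # attend to a random pair
                i = random.randrange(len(alternatives[a]))     # and a random attribute
                if alternatives[a][i] > alternatives[b][i]:
                    evidence[a] += 1
                elif alternatives[b][i] > alternatives[a][i]:
                    evidence[b] += 1
                leader = max(evidence, key=evidence.get)
                runner_up = max(v for k, v in evidence.items() if k != leader)
                if evidence[leader] - runner_up >= threshold:  # relative stopping rule
                    break
            return leader

        random.seed(2)
        options = {"target": (6, 4), "competitor": (4, 6), "decoy": (5, 3)}  # toy attribute values
        print(mdbs_choice(options))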

    Share Investors' Competence and Overconfidence in Investment Decision Making

    Many factors may affect investors when making investment decisions; among them are overconfidence and competence. These factors are thought to influence investment decision making. This research aims to determine the effect of competence and overconfidence on investment decisions. It is a quantitative study using a survey administered to beginner investors. The sampling method was judgment sampling, with a sample of 30 beginner-investor respondents. The analysis used is MRA (Multiple Regression Analysis). The results show that investor competence does not affect investment decisions, whereas overconfidence does. Keywords: competence, overconfidence, investment decision.
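
    For readers unfamiliar with MRA, a minimal illustration of that analysis in Python; the data below are fabricated placeholders with the reported direction of effects baked in, not the study's data:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 30                                       # same sample size as the study
        competence = rng.normal(3.5, 0.8, n)         # hypothetical survey-scale scores
        overconfidence = rng.normal(3.0, 0.9, n)
        decision = 1.0 + 0.05 * competence + 0.6 * overconfidence + rng.normal(0, 0.5, n)

        # Multiple regression: decision ~ intercept + competence + overconfidence
        X = np.column_stack([np.ones(n), competence, overconfidence])
        coef, *_ = np.linalg.lstsq(X, decision, rcond=None)
        print(dict(zip(["intercept", "competence", "overconfidence"], np.round(coef, 3))))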

    Deciding when to decide: time-variant sequential sampling models explain the emergence of value-based decisions in the human brain

    The cognitive and neuronal mechanisms of perceptual decision making have been successfully linked to sequential sampling models. These models describe the decision process as a gradual accumulation of sensory evidence over time. The temporal evolution of economic choices, however, remains largely unexplored. We tested whether sequential sampling models help to understand the formation of value-based decisions in terms of behavior and brain responses. We used functional magnetic resonance imaging (fMRI) to measure brain activity while human participants performed a buying task in which they freely decided upon how and when to choose. Behavior was accurately predicted by a time-variant sequential sampling model that uses a decreasing rather than fixed decision threshold to estimate the time point of the decision. Presupplementary motor area, caudate nucleus, and anterior insula activation was associated with the accumulation of evidence over time. Furthermore, at the beginning of the decision process the fMRI signal in these regions accounted for trial-by-trial deviations from behavioral model predictions: relatively high activation preceded relatively early responses. The updating of value information was correlated with signals in the ventromedial prefrontal cortex, left and right orbitofrontal cortex, and ventral striatum but also in the primary motor cortex well before the response itself. Our results support a view of value-based decisions as emerging from sequential sampling of evidence and suggest a close link between the accumulation process and activity in the motor system when people are free to respond at any time.
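
    A minimal simulation of the key mechanism, a sequential sampling trial with a decreasing (collapsing) threshold; all parameter values are illustrative, not the fitted ones:

        import random

        def collapsing_bound_trial(drift=0.05, noise=1.0, b0=3.0, decay=0.02, dt=1.0):
            # Accumulate noisy evidence; decide when it crosses a threshold
            # that shrinks over time, so a decision is eventually forced.
            x, t = 0.0, 0.0
            while True:
                t += dt
                x += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
                bound = b0 * max(0.0, 1.0 - decay * t)
                if abs(x) >= bound:
                    return ("buy" if x > 0 else "decline"), t

        random.seed(3)
        print(collapsing_bound_trial())   # (choice, decision time) for one simulated trial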

    Smart Sampling for Lightweight Verification of Markov Decision Processes

    Markov decision processes (MDPs) are useful for modelling optimisation problems in concurrent systems. Verifying MDPs with efficient Monte Carlo techniques requires that their nondeterminism be resolved by a scheduler. Recent work has introduced the elements of lightweight techniques to sample directly from scheduler space, but finding optimal schedulers by simple sampling may be inefficient. Here we describe "smart" sampling algorithms that can make substantial improvements in performance. Comment: IEEE conference style, 11 pages, 5 algorithms, 11 figures, 1 table.
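
    A simplified sketch of the lightweight idea plus a smart two-stage budget. The hash-based scheduler encoding, the `simulate` callback, and the toy MDP are assumptions for illustration; the paper's actual algorithms differ in detail:

        import hashlib
        import random

        def scheduler_choice(seed, state, actions):
            # A single integer seed identifies a deterministic memoryless
            # scheduler: nondeterminism is resolved by hashing (seed, state).
            h = hashlib.sha256(f"{seed}:{state}".encode()).digest()
            return actions[int.from_bytes(h[:8], "big") % len(actions)]

        def estimate(seed, simulate, n):
            # Monte Carlo estimate of the property probability under one scheduler.
            choose = lambda state, actions: scheduler_choice(seed, state, actions)
            return sum(simulate(choose) for _ in range(n)) / n

        def smart_sampling(simulate, budget=10_000, n_schedulers=100):
            # Stage 1: spread half the budget thinly across many sampled schedulers.
            # Stage 2: spend the other half refining the best candidate.
            seeds = [random.getrandbits(32) for _ in range(n_schedulers)]
            per = max(1, budget // (2 * n_schedulers))
            best = max(seeds, key=lambda s: estimate(s, simulate, per))
            return best, estimate(best, simulate, budget // 2)

        # Toy MDP trace: reach state 3 within 5 steps; action "a" moves up
        # with p = 0.8, action "b" with p = 0.4. Returns 1 if the property held.
        def simulate(choose):
            state = 0
            for _ in range(5):
                p = 0.8 if choose(state, ["a", "b"]) == "a" else 0.4
                state += random.random() < p
                if state == 3:
                    return 1
            return 0

        random.seed(4)
        print(smart_sampling(simulate, budget=2_000, n_schedulers=50))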

    Faking Fairness via Stealthily Biased Sampling

    Auditing the fairness of decision-makers is now in high demand. To respond to this social demand, several fairness auditing tools have been developed. The focus of this study is to raise awareness of the risk of malicious decision-makers who fake fairness by abusing the auditing tools and thereby deceiving the social communities. The question is whether such a fraud by the decision-maker is detectable, so that society can avoid the risk of fake fairness. In this study, we answer this question negatively. We specifically focus on a situation where the decision-maker publishes a benchmark dataset as the evidence of his/her fairness and attempts to deceive a person who uses an auditing tool that computes a fairness metric. To assess the (un)detectability of the fraud, we explicitly construct an algorithm, stealthily biased sampling, that can deliberately construct an evil benchmark dataset via subsampling. We show that the fraud made by stealthily biased sampling is indeed difficult to detect both theoretically and empirically. Comment: Accepted at the Special Track on AI for Social Impact (AISI) at AAAI2020.
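
    A much-simplified illustration of the subsampling trick. The paper's algorithm additionally keeps the benchmark close to the true data under a Wasserstein-distance constraint; the helper below only shows how equal per-cell sampling can make an unfair decision log look fair:

        import random

        def stealthily_biased_subsample(records, k):
            # Draw k records from every (group, outcome) cell, so each group's
            # acceptance rate in the published benchmark is exactly k / 2k = 0.5.
            cells = {}
            for r in records:
                cells.setdefault((r["group"], r["accepted"]), []).append(r)
            benchmark = []
            for members in cells.values():
                benchmark += random.sample(members, min(k, len(members)))
            return benchmark

        random.seed(5)
        log = ([{"group": "A", "accepted": int(random.random() < 0.7)} for _ in range(500)]
               + [{"group": "B", "accepted": int(random.random() < 0.3)} for _ in range(500)])
        bench = stealthily_biased_subsample(log, k=50)
        for g in ("A", "B"):
            members = [r for r in bench if r["group"] == g]
            print(g, sum(r["accepted"] for r in members) / len(members))
        # Both rates print 0.5: the audited demographic-parity gap vanishes
        # even though the underlying decisions are heavily skewed.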

    Cardiac biomarkers by point-of-care testing - back to the future?

    The measurement of the cardiac troponins (cTn), cardiac troponin T (cTnT) and cardiac troponin I (cTnI), is integral to the management of patients with suspected acute coronary syndromes (ACS). Patients without clear electrocardiographic evidence of myocardial infarction require measurement of cTnT or cTnI. It therefore follows that the rapid turnaround time (TAT) and immediacy of results return achieved by point-of-care testing (POCT) offer a substantial clinical benefit. Rapid results return plus immediate decision-making should translate into improved patient flow and improved therapeutic decision-making. The development of high-sensitivity troponin assays offers significant clinical advantages. Diagnostic algorithms have been devised utilising very low cut-offs at first presentation and rapid sequential measurements based on admission and 3 h sampling, and most recently on admission and 1 h sampling. Such troponin algorithms would be even more ideally suited to point-of-care testing, as the TAT typically achieved by the diagnostic laboratory, around 60 min, is as long as the sampling interval the algorithm requires of the clinician. However, the limits of detection and analytical imprecision required to use these algorithms are not yet met by any easy-to-use POCT system.
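
    Back-of-envelope timing for why TAT matters under the admission + 1 h algorithm (the 15 min POCT TAT is an assumed figure for illustration):

        def minutes_to_rule_in_out(sampling_interval_min, tat_min):
            # The second sample is drawn at t = interval; the rule-in/rule-out
            # decision can be made once its result returns.
            return sampling_interval_min + tat_min

        print(minutes_to_rule_in_out(60, 60))  # central lab, ~60 min TAT: 120 min
        print(minutes_to_rule_in_out(60, 15))  # hypothetical POCT, 15 min TAT: 75 min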